
    High-Performance Computing for the simulation of particles with the Discrete Element Method

    In this talk, we will give an overview of the main techniques used for the parallelization of numerical simulations on High-Performance Computing platforms, with a particular focus on the Discrete Element Method (DEM), a numerical method for simulating the motion of granular materials. We will cover the main parallelization paradigms and their implementations (shared memory with OpenMP and distributed memory with MPI), present the main performance bottlenecks, and introduce load-balancing techniques.
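
    As a rough illustration of the shared-memory paradigm mentioned in this abstract, the sketch below parallelizes a naive DEM contact-force loop with OpenMP; the particle layout, the linear spring contact law and the stiffness value are illustrative assumptions, not material from the talk.

        // Minimal sketch (assumptions, not the talk's code): a naive DEM
        // contact-force loop parallelized with OpenMP. Each thread writes
        // only to its own range of particles, so no locks are needed.
        #include <cmath>
        #include <cstdio>
        #include <vector>

        struct Particle { double x, y, z, r, fx, fy, fz; };

        int main() {
            std::vector<Particle> p(2000, Particle{0, 0, 0, 0.01, 0, 0, 0});
            for (std::size_t i = 0; i < p.size(); ++i) p[i].x = 0.015 * i; // a 1D chain

            const double k = 1e5; // linear spring stiffness (illustrative value)

            // schedule(dynamic) redistributes iterations at run time: particles
            // with many contacts cost more, which is exactly the kind of load
            // imbalance that load-balancing techniques address.
            #pragma omp parallel for schedule(dynamic)
            for (long i = 0; i < (long)p.size(); ++i) {
                for (std::size_t j = 0; j < p.size(); ++j) {
                    if ((std::size_t)i == j) continue;
                    double dx = p[j].x - p[i].x, dy = p[j].y - p[i].y, dz = p[j].z - p[i].z;
                    double d = std::sqrt(dx * dx + dy * dy + dz * dz);
                    double overlap = p[i].r + p[j].r - d;
                    if (overlap > 0.0) { // in contact: repulsive spring force on i
                        p[i].fx -= k * overlap * dx / d;
                        p[i].fy -= k * overlap * dy / d;
                        p[i].fz -= k * overlap * dz / d;
                    }
                }
            }
            std::printf("force on particle 0: (%g, %g, %g)\n", p[0].fx, p[0].fy, p[0].fz);
            return 0;
        }

    Built with g++ -fopenmp, the loop runs multithreaded; without the flag the pragma is ignored and the program still runs serially. A real DEM code would replace the all-pairs O(N^2) search with cell lists and, on distributed memory, split the domain across MPI ranks.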

    Optimized Coordinated Checkpoint/Rollback Protocol using a Dataflow Graph Model

    Fault-tolerance protocols play an important role in today's long-running scientific parallel applications. The probability of a failure can be significant given the number of unreliable components involved in an execution. We present our approach and preliminary results for a new checkpoint/rollback protocol based on a coordinated scheme. The application is described using a dataflow graph, an abstract representation of its execution. Thanks to this representation, fault recovery in our protocol requires only a partial restart of the other processes. Simulations on a domain-decomposition application show that the amount of computation required to restart and the number of processes involved are reduced compared to the classical global rollback protocol.
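
    The following is a minimal sketch of the partial-restart idea, not the paper's actual protocol: given a dataflow graph of tasks, it walks backwards from the tasks lost on a failed process, stopping at checkpointed outputs, so only a subset of tasks (and hence of processes) must re-execute. The task ownership, checkpoint flags and graph are made-up examples.

        // Minimal sketch (assumptions, not the paper's protocol): compute the
        // set of tasks to re-execute after one process fails. Only tasks whose
        // outputs lived on the failed process, plus their not-yet-checkpointed
        // ancestors, are restarted -- versus rolling every process back.
        #include <cstdio>
        #include <set>
        #include <vector>

        struct Task { int owner; bool checkpointed; std::vector<int> deps; };

        static void collect(const std::vector<Task>& g, int t, std::set<int>& restart) {
            if (!restart.insert(t).second) return;  // already scheduled
            for (int d : g[t].deps)
                if (!g[d].checkpointed)             // checkpointed outputs need no replay
                    collect(g, d, restart);
        }

        int main() {
            // Tiny example graph: {owning process, checkpointed?, dependencies}
            std::vector<Task> g = {
                {0, true,  {}},      // t0 on p0, output checkpointed
                {1, true,  {}},      // t1 on p1, output checkpointed
                {0, false, {0, 1}},  // t2 on p0
                {1, false, {1, 2}},  // t3 on p1
                {2, false, {2, 3}},  // t4 on p2
            };
            int failed = 1; // process 1 crashes

            std::set<int> restart;
            for (std::size_t t = 0; t < g.size(); ++t)
                if (g[t].owner == failed && !g[t].checkpointed)
                    collect(g, (int)t, restart);

            for (int t : restart)
                std::printf("re-execute task %d on process %d\n", t, g[t].owner);
            return 0;
        }

    Here only t2 and t3 are re-executed, touching processes p0 and p1 but not p2. A real protocol must also handle messages already consumed by surviving tasks; this sketch only shows the ancestor walk over the dataflow graph.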

    The XDEM Multi-physics and Multi-scale Simulation Technology: Review on DEM-CFD Coupling, Methodology and Engineering Applications

    The XDEM multi-physics and multi-scale simulation platform is rooted in the Extended Discrete Element Method (XDEM) and is being developed at the Institute of Computational Engineering at the University of Luxembourg. The platform is an advanced multi-physics simulation technology that combines flexibility and versatility to establish the next generation of multi-physics and multi-scale simulation tools. For this purpose, the simulation framework relies on coupling various predictive tools based on both Eulerian and Lagrangian approaches. Eulerian approaches represent the wide field of continuum models, while the Lagrangian approach is perfectly suited to characterise discrete phases. Thus, continuum models include classical simulation tools such as Computational Fluid Dynamics (CFD) or Finite Element Analysis (FEA), while an extended configuration of the classical Discrete Element Method (DEM) addresses the discrete, e.g. particulate, phase. Apart from predicting the trajectories of individual particles, XDEM extends the application to estimating the thermodynamic state of each particle through advanced and optimised algorithms. The thermodynamic state may include temperature and species distributions due to chemical reactions and external heat sources. Hence, coupling these extended features with either CFD or FEA opens up a wide range of applications as diverse as the pharmaceutical industry (e.g. drug production), the agriculture, food and processing industry, mining, construction and agricultural machinery, metals manufacturing, energy production and systems biology.
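
    As a toy illustration of the per-particle thermodynamic state described above (all names and the heat-transfer law are assumptions, not XDEM's API), the sketch below advances the temperature of Lagrangian particles using the Eulerian fluid temperature of the CFD cell each one occupies.

        // Minimal sketch (assumptions, not XDEM code): a Lagrangian particle
        // carrying a thermodynamic state (here just temperature), advanced
        // from the Eulerian fluid field of the CFD cell it sits in.
        #include <cstdio>
        #include <vector>

        struct FluidCell { double T; };                         // Eulerian (CFD) side
        struct Particle  { int cell; double T, m, cp, area; };  // Lagrangian (DEM) side

        int main() {
            std::vector<FluidCell> cfd = {{300.0}, {900.0}};    // cold and hot cell [K]
            std::vector<Particle>  dem = {{0, 400.0, 1e-3, 800.0, 1e-4},
                                          {1, 400.0, 1e-3, 800.0, 1e-4}};
            const double h = 50.0, dt = 0.01; // convective coefficient, time step (assumed)

            for (int step = 0; step < 1000; ++step)
                for (auto& p : dem) {
                    // Convective exchange with the local fluid: m*cp*dT/dt = h*A*(Tf - Tp)
                    double q = h * p.area * (cfd[p.cell].T - p.T);
                    p.T += dt * q / (p.m * p.cp);
                }

            std::printf("particle temperatures: %.1f K, %.1f K\n", dem[0].T, dem[1].T);
            return 0;
        }

    Each particle relaxes toward the temperature of its surrounding fluid cell; XDEM's actual algorithms additionally track species distributions, chemical reactions and the back-coupling onto the CFD or FEA side.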

    A co-located partitions strategy for parallel CFD-DEM couplings

    In this work, a new partition-collocation strategy for the parallel execution of CFD-DEM couplings is investigated. Good parallel performance is a key issue for Eulerian-Lagrangian software that aims to solve industrially significant problems, as the computational cost of these couplings is one of their main drawbacks. The approach presented here consists in co-locating the overlapping parts of the simulation domain of each software on the same MPI process, in order to reduce the cost of the data exchanges. It is shown how this strategy reduces memory consumption and inter-process communication between CFD and DEM to a minimum, thereby overcoming an important parallelization bottleneck identified in the literature. Three benchmarks are proposed to assess the consistency and scalability of this approach. A coupled execution on 280 cores shows that less than 0.1% of the time is spent performing inter-physics data exchange.
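
    The sketch below illustrates the co-location idea in hedged form (the exchanged fields, the ring topology and all names are assumptions, not the paper's implementation): because the CFD cells and the DEM particles of a sub-domain live on the same MPI rank, the inter-physics exchange is a plain local copy, while only intra-physics halo exchanges still cross process boundaries.

        // Minimal sketch of the co-location idea (assumed names and fields):
        // each rank owns both the CFD partition and the DEM partition of one
        // sub-domain, so CFD<->DEM exchange never leaves the process.
        #include <mpi.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            // Co-located partitions of the same sub-domain on this rank.
            std::vector<double> cfd_porosity(100, 1.0); // filled from the DEM side
            std::vector<double> dem_drag(100, 0.0);     // filled from the CFD side

            // Inter-physics exchange: a local copy, no MPI message at all.
            for (std::size_t i = 0; i < cfd_porosity.size(); ++i) {
                cfd_porosity[i] = 0.6;  // stand-in for a porosity from DEM particles
                dem_drag[i]     = 1e-3; // stand-in for a drag force from CFD cells
            }

            // Only the usual intra-physics halo exchange with neighbouring
            // sub-domains still crosses process boundaries (ring neighbours here).
            double halo_out = cfd_porosity.back(), halo_in = 0.0;
            int next = (rank + 1) % size, prev = (rank + size - 1) % size;
            MPI_Sendrecv(&halo_out, 1, MPI_DOUBLE, next, 0,
                         &halo_in,  1, MPI_DOUBLE, prev, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            if (rank == 0) std::printf("halo value received: %g\n", halo_in);
            MPI_Finalize();
            return 0;
        }

    In a conventional layout, where CFD and DEM use independent partitionings, the inter-physics loop above would itself require MPI traffic between the two codes; co-location removes that cost, which is consistent with the reported sub-0.1% exchange time on 280 cores.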